UniFaceGAN: A Unified Framework for Temporally Consistent Facial Video Editing

Authors

Abstract

Recent research has witnessed advances in facial image editing tasks including face swapping and face reenactment. However, these methods are confined to dealing with one specific task at a time. In addition, for video editing, previous methods either simply apply transformations frame by frame or utilize multiple frames in a concatenated or iterative fashion, which leads to noticeable visual flickers. In this paper, we propose a unified temporally consistent facial video editing framework termed UniFaceGAN. Based on a 3D reconstruction model and a simple yet efficient dynamic training sample selection mechanism, our framework is designed to handle face swapping and face reenactment simultaneously. To enforce temporal consistency, a novel loss constraint is introduced based on barycentric coordinate interpolation. Besides, a region-aware conditional normalization layer is proposed to replace the traditional AdaIN or SPADE and synthesize more context-harmonious results. Compared with state-of-the-art methods, our framework generates video portraits that are more photo-realistic and temporally smooth.
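The abstract names its two mechanisms without detail. As a rough illustration of the second one, below is a minimal PyTorch-style sketch of what a region-aware conditional normalization layer could look like: unlike AdaIN (one global scale/shift per channel) or SPADE (modulation predicted per pixel from a segmentation map), it blends a separate learned modulation per semantic face region. All names, shapes, and design choices here (`RegionAwareNorm`, `style_dim`, the soft `region_mask`) are assumptions for illustration, not the paper's actual layer.

```python
# Hypothetical sketch of a region-aware conditional normalization layer
# (names and shapes are assumptions; the paper's actual design may differ).
# Like SPADE, it modulates normalized activations with spatially varying
# scale/shift, but here each face region gets its own modulation branch.
import torch
import torch.nn as nn

class RegionAwareNorm(nn.Module):
    def __init__(self, num_features: int, num_regions: int, style_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # One (gamma, beta) pair per semantic face region (e.g. skin, eyes, mouth).
        self.gamma = nn.Linear(style_dim, num_regions * num_features)
        self.beta = nn.Linear(style_dim, num_regions * num_features)
        self.num_regions = num_regions
        self.num_features = num_features

    def forward(self, x, style, region_mask):
        # x:           (B, C, H, W) feature map
        # style:       (B, style_dim) conditioning code
        # region_mask: (B, num_regions, H, W) soft one-hot segmentation
        b, c, h, w = x.shape
        normed = self.norm(x)
        gamma = self.gamma(style).view(b, self.num_regions, c, 1, 1)
        beta = self.beta(style).view(b, self.num_regions, c, 1, 1)
        # Blend per-region modulations into dense maps via the segmentation mask.
        mask = region_mask.unsqueeze(2)                  # (B, R, 1, H, W)
        gamma_map = (gamma * mask).sum(dim=1)            # (B, C, H, W)
        beta_map = (beta * mask).sum(dim=1)
        return normed * (1 + gamma_map) + beta_map
```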


Related articles

Temporally Consistent Gradient Domain Video Editing

In the context of video editing, enforcing spatio-temporal consistency is an important issue. With that purpose, the current variational models for gradient domain video editing include space and time regularization terms. The spatial terms are based on the usual space derivatives, the temporal ones are based on the convective derivative, and both are balanced by a parameter β. However, the usu...
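The excerpt is cut off, but the temporal term it describes is standard enough to sketch: the convective derivative differentiates along motion trajectories, i.e. it compares each pixel with its flow-advected correspondent in the next frame. A small numpy illustration follows (an assumed discretization, not the paper's code):

```python
# Illustrative numpy sketch (not the paper's code) of the convective
# derivative used as a temporal regularizer: u_{t+1} sampled along the
# optical flow minus u_t, i.e. the change of u following the motion.
import numpy as np
from scipy.ndimage import map_coordinates

def convective_derivative(u_t, u_next, flow):
    # u_t, u_next: (H, W) frames of the edited quantity
    # flow: (H, W, 2) forward optical flow (dx, dy) from t to t+1
    h, w = u_t.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Sample u_{t+1} at the positions each pixel moves to.
    coords = np.stack([ys + flow[..., 1], xs + flow[..., 0]])
    u_next_warped = map_coordinates(u_next, coords, order=1, mode='nearest')
    return u_next_warped - u_t   # discrete convective derivative

# A variational energy would then balance spatial and temporal terms, e.g.
#   E(u) = sum |grad u - g|^2 + beta * sum |convective_derivative(u)|^2
```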


Temporally Consistent Wide Baseline Facial Performance Capture via Image Warping

In this paper, we present a method for detailed temporally consistent facial performance capture that supports any number of arbitrarily placed video cameras. Using a suitable 3D model as reference geometry, our method tracks facial movement and deformation as well as photometric changes due to illumination and shadows. In an analysis-by-synthesis framework, we warp one single reference image p...
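As a toy illustration of the analysis-by-synthesis idea (not the paper's pipeline), the sketch below optimizes warp parameters so that a warped reference image matches an observed frame, with a plain affine warp standing in for the face-model-driven warp; all names are hypothetical.

```python
# Toy analysis-by-synthesis loop (illustrative only): optimize a parametric
# warp so that the warped reference image matches the observed frame.
import torch
import torch.nn.functional as F

def synthesize(reference, theta):
    # reference: (1, C, H, W) image tensor; theta: (1, 2, 3) affine warp
    # parameters standing in for the full face-model-driven warp.
    grid = F.affine_grid(theta, reference.shape, align_corners=False)
    return F.grid_sample(reference, grid, align_corners=False)

def fit(reference, observed, steps=200, lr=1e-2):
    theta = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]], requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(synthesize(reference, theta), observed)
        loss.backward()   # photometric residual drives the warp update
        opt.step()
    return theta
```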


Temporally Consistent Motion Segmentation from RGB-D Video

We present a method for temporally consistent motion segmentation from RGB-D videos assuming a piecewise rigid motion model. We formulate global energies over entire RGB-D sequences in terms of the segmentation of each frame into a number of objects, and the rigid motion of each object through the sequence. We develop a novel initialization procedure that clusters feature tracks obtained from t...
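One building block implied by the piecewise-rigid model is estimating the rigid motion of a single segmented object from 3D correspondences between frames. Below is a minimal sketch using the Kabsch algorithm; this is an assumed ingredient for illustration, not the paper's full energy minimization.

```python
# Minimal Kabsch/Procrustes sketch (illustrative): estimate the rigid motion
# (R, t) of one segmented object between two frames from 3D point
# correspondences, the building block of a piecewise-rigid model.
import numpy as np

def fit_rigid_motion(P, Q):
    # P, Q: (N, 3) corresponding 3D points of one object in frames t, t+1
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t                               # Q ≈ (R @ P.T).T + t
```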


Online Temporally Consistent Indoor Depth Video Enhancement via Static Structure

• State-F: Forward outliers, p(d_x | Z_x, m_x = 1) = U_f(d_x | Z_x) = U_f · 1[d_x^t < Z_x]. To combine all three states into a unified model and describe the overall likelihood that the input depth samples fit the current static structure, we use a mixture model similar to the Gaussian mixture model [1]. Together with ...
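A minimal sketch of such a mixture likelihood, assuming a Gaussian inlier component around the static-structure depth plus uniform forward/backward outlier components (the weights, names, and component forms are illustrative assumptions, not the paper's model):

```python
# Illustrative mixture likelihood for a depth sample d against the current
# static-structure depth Z: Gaussian inlier term plus uniform outlier terms.
import numpy as np

def depth_likelihood(d, Z, w_inlier=0.8, w_fwd=0.1, w_bwd=0.1,
                     sigma=0.02, u_fwd=1.0, u_bwd=1.0):
    # d: observed depth sample(s); Z: static-structure depth estimate
    inlier = np.exp(-0.5 * ((d - Z) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    fwd = u_fwd * (d < Z)     # forward outliers: in front of the structure
    bwd = u_bwd * (d > Z)     # backward outliers: behind the structure
    return w_inlier * inlier + w_fwd * fwd + w_bwd * bwd
```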


A unified shape editing framework based on tetrahedral control mesh

It is a fundamental but challenging problem to efficiently edit complex 3D objects. By embedding the input models into coarse tetrahedral control meshes, this paper develops a unified framework to discuss two useful editing operations: interactive deformation and deformation transfer. First, a new rigidity energy is proposed to make the tetrahedral control mesh deform as rigidly as possible, wh...
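The excerpt does not give the rigidity energy itself, but a common as-rigid-as-possible formulation penalizes each tetrahedron's deviation from a pure rotation. A rough numpy sketch under that assumption:

```python
# Rough sketch of an as-rigid-as-possible energy over a tetrahedral control
# mesh (formulation assumed): each tet's deformation gradient F is compared
# with its closest rotation R, obtained from the SVD of F.
import numpy as np

def tet_rigidity_energy(rest_verts, def_verts, tets):
    # rest_verts, def_verts: (V, 3) vertex positions; tets: (T, 4) indices
    energy = 0.0
    for t in tets:
        Dm = (rest_verts[t[1:]] - rest_verts[t[0]]).T    # 3x3 rest edge matrix
        Ds = (def_verts[t[1:]] - def_verts[t[0]]).T      # 3x3 deformed edges
        F = Ds @ np.linalg.inv(Dm)                       # deformation gradient
        U, _, Vt = np.linalg.svd(F)
        R = U @ Vt                                       # closest rotation
        if np.linalg.det(R) < 0:                         # fix a reflection
            R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
        energy += np.linalg.norm(F - R, 'fro') ** 2      # deviation from rigid
    return energy
```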



Journal

Journal title: IEEE Transactions on Image Processing

Year: 2021

ISSN: 1057-7149, 1941-0042

DOI: https://doi.org/10.1109/tip.2021.3089909